Agentic AI: The next privacy battleground?
Posted: February 17, 2025
Through the constant evolution of Artificial Intelligence (AI), one subject is coming to the foreground as a topic of intense debate among privacy and business ethics professionals: agentic AI. The debate stems from all the usual privacy and security concerns about AI generally, plus additional apprehension about agentic AI’s independence and how that independence interacts with privacy requirements governing automated decisions.
What is agentic AI?
This subcategory of AI goes beyond the output-creating goal of generative AI and into the (scarier) realm of proactive, autonomous decision making. While generative AI can create sophisticated content, it does so in response to human prompts and within pre-set parameters. For example, a user can give instructions like “build a PowerPoint in a formal business style that compares population increases in Argentina, Chile, and Peru over the last three years and explains possible reasons for those increases.” In this way, the user prompts the activity, sets the purpose, and provides instructions regarding style, length, data sources, and other parameters.
Agentic AI, on the other hand, is much wilder and more independent of human intervention. Agentic AI can “…act autonomously to achieve goals without the need for constant human guidance. The agentic AI system understands what the goal or vision of the user is and the context to the problem they are trying to solve.” In contrast with the generative AI goal of creating content based on human commands, parameters, and direction, agentic AI focuses on making decisions. These decisions can be complex, and agentic AI tools do not have to rely on human prompts before starting. Agentic AI tools can even take autonomous actions.
Take this scenario: To improve the customer experience, a company deploys an agentic AI tool to watch how a visitor interacts across multiple channels – social sites, the company’s own website, third-party websites, customer support channels, and apps. Without human intervention, that agentic AI tool can customize that individual’s experiences with the company in real time, including interacting with the customer directly. The agentic AI tool can also proactively trigger other processes within the company, such as returns, discount offers, and alternate shipping methods.
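To make the scenario concrete, here is a minimal sketch of such an event-driven agent loop in Python. Everything in it (CustomerEvent, decide_action, the signal names) is a hypothetical illustration rather than any vendor’s actual API, and the simple rule-based decision step stands in for the LLM-driven planning a real agentic tool would use:

```python
from dataclasses import dataclass

# Hypothetical event emitted by any channel (website, app, support chat).
@dataclass
class CustomerEvent:
    customer_id: str
    channel: str   # e.g. "web", "app", "support"
    signal: str    # e.g. "abandoned_cart", "repeat_return"

def decide_action(event: CustomerEvent) -> str | None:
    """Stand-in for the agent's autonomous decision step.

    A real agentic system would plan with an LLM and much richer
    context; simple rules are used here only to keep the sketch runnable.
    """
    if event.signal == "abandoned_cart":
        return "offer_discount"
    if event.signal == "repeat_return":
        return "initiate_return_flow"
    return None  # no action warranted for this event

def run_agent(events: list[CustomerEvent]) -> None:
    """Consume cross-channel events and act without a human prompt."""
    for event in events:
        action = decide_action(event)
        if action:
            print(f"[agent] {event.customer_id}/{event.channel}: triggering {action}")

run_agent([
    CustomerEvent("cust-42", "web", "abandoned_cart"),
    CustomerEvent("cust-42", "support", "repeat_return"),
])
```

The key structural point is that nothing in the loop waits for a human prompt: the events themselves trigger the decisions and the downstream actions.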
The benefits of agentic AI are easy to see. Like most types of AI, agentic AI can sift through and make sense of enormous amounts of data. Unlike other types of AI, however, it does not need to follow sets of human-directed rules or wait for human intervention; it can run independently. Because of its sophistication in understanding context, and because of the reliability and quality of the data on which it runs, agentic AI is less likely to produce hallucinations. Moreover, while other types of AI systems only combine, make sense of, and display information, agentic AI can go a step further by acting toward a goal.
Privacy concerns surrounding agentic AI
The core problem with agentic AI over and above general worries about AI, of course, is trust. As one source describes, our relationship with AI is a little like a parent’s relationship with a child. The parent first sets guardrails and rules for their child, because that child’s emotional and cognitive development is not at the point where the child can operate ethically and effectively on their own. Later, as the parent builds trust in the child’s abilities, the child can begin to operate more independently.
Similarly, we can only begin to remove human direction and intervention and give AI the ability to establish its own rules and guardrails if we have confidence in the ethical and practical outcome of that autonomy. The challenge with AI generally and agentic AI specifically is that it is only beginning to prove that it can make good decisions in a nuanced environment.
Additional privacy concerns add to the caution about simply letting AI loose. Agentic AI, like other AI tools, demands copious amounts of data on which to train. These large data sets may not come with data subject consent for their use, and the company in question may not have another legal basis to cover the training activity. Another concern about AI training practices is that training data becomes part of the AI tool and its implementation, so personal data (even sensitive data) may show up in outputs on an ongoing basis.
Regulators also express privacy and other concerns about AI. Many jurisdictions have enacted laws or issued guidelines aimed at addressing legal basis, security, and discrimination/bias. Moreover, many general privacy laws restrict automated decision making, for example by allowing data subjects to opt out, requiring specific notices, and obligating organizations to conduct data protection impact assessments on these activities. Given that one of the main purposes of agentic AI is to make decisions without human involvement, these rules around automated decisions, plus general privacy law requirements and AI-specific rules and guidance, combine to create a challenging environment for the activity.
Agentic AI best practices
Despite these difficulties, however, agentic AI may be worth the effort. As noted previously, agentic AI can handle more nuanced situations and accommodate even moral, ethical, and cultural considerations. It makes fewer mistakes, and its applications in customer support, business operations, and customer experience personalization offer an enormous potential upside.
As we build trust in agentic AI’s ability to make sound ethical and practical decisions, there are some practices that can both increase compliance and decrease risk.
- Trust, but verify: Even though agentic AI is designed to run without human intervention, that does not mean that humans cannot be involved. Returning to the parent-child analogy above, if agentic AI is a teenager at this point, it may be reasonable to keep human reviews, checks, and monitoring in place.
- Deeply understand training models: Whether an organization buys or builds its agentic AI tool, it will be important to understand how, and on what data, that tool was trained. Legal basis and data subject consent will be important considerations, as will the questions of bias and decision validity.
- Provide clear notice: Not only do many relevant laws require notice that a company is using AI for a particular purpose, but customers can also feel duped if they believe they are interacting with a human being when they are actually interacting with technology. Providing clear, just-in-time notice about the role agentic AI plays in processes, and especially in interactions, will increase compliance and enhance customer trust.
- Provide a strong alternative process and opt-out mechanism: Many regulators want consumers to be able to opt out of purely technology-driven decisions, particularly those with a legal or otherwise significant impact on the consumer. Some consumers may also disagree with an automated decision and want to work with a real human to resolve the complaint. With both pressures in mind, allowing a consumer to opt out of an automated decision-making process in favor of a human-driven one, and providing a human-driven process for recourse once a decision is made, will go a long way toward both compliance and good customer experience (a minimal sketch of such a gate follows this list).
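As a rough illustration of the first and last practices above, here is a minimal, hypothetical Python sketch of a decision gate that honors an opt-out registry and escalates high-impact decisions to human review. The names (OPTED_OUT, Decision, route_decision) are assumptions made for this example, not part of any real framework:

```python
from dataclasses import dataclass

# Hypothetical opt-out registry: customers who chose human-driven handling.
OPTED_OUT: set[str] = {"cust-17"}

@dataclass
class Decision:
    customer_id: str
    action: str
    high_impact: bool  # e.g. a credit denial or account closure

def route_decision(decision: Decision) -> str:
    """Gate an agentic decision before it takes effect.

    Opted-out customers and high-impact decisions are routed to a
    human instead of being executed automatically.
    """
    if decision.customer_id in OPTED_OUT:
        return "human_process"       # honor the consumer's opt-out
    if decision.high_impact:
        return "human_review_queue"  # trust, but verify
    return "execute_automatically"

# Usage: a routine discount executes; a credit denial is held for review;
# an opted-out customer is always routed to a human.
print(route_decision(Decision("cust-42", "offer_discount", high_impact=False)))
print(route_decision(Decision("cust-42", "deny_credit", high_impact=True)))
print(route_decision(Decision("cust-17", "offer_discount", high_impact=False)))
```

The design point is that the gate runs before any agentic decision takes effect, so the regulatory opt-out requirement and the “trust, but verify” checkpoint live in one place.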
Summary
In summary, agentic AI shows enormous potential for business and customer experience benefits. Its ability to incorporate moral considerations and operate independently in nuanced situations makes it trustworthy and practical in more applications. However, the very nature of agentic AI’s independence and action orientation means that privacy professionals have a complex set of legal and practical obligations to work through. Understanding the privacy and bias sensitivity of the training model, providing clear notice and the ability to opt out of or contest decisions, and of course inserting human oversight into the process will go a long way toward making agentic AI a worthwhile investment.